
    The Interpolated MVU Mechanism For Communication-efficient Private Federated Learning

    We consider private federated learning (FL), where a server aggregates differentially private gradient updates from a large number of clients in order to train a machine learning model. The main challenge is balancing privacy against both the classification accuracy of the learned model and the amount of communication between the clients and the server. In this work, we build on a recently proposed method for communication-efficient private FL, the MVU mechanism, by introducing a new interpolation mechanism that admits a more efficient privacy analysis. The result is the Interpolated MVU mechanism, which provides state-of-the-art results for communication-efficient private FL on a variety of datasets.
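
    To illustrate the setting the abstract describes (clients send privatized, low-bit gradient updates that a server averages), here is a minimal Python sketch. It uses gradient clipping, Gaussian noise as a stand-in for the privacy mechanism, and coarse uniform quantization; it does not implement the MVU or Interpolated MVU mechanisms themselves, and the function names (privatize_update, server_aggregate) and parameters (noise_std, num_bits) are illustrative assumptions.

        import numpy as np

        # Illustrative sketch only (assumed names/parameters), not the MVU or I-MVU mechanism:
        # each client clips its gradient, adds Gaussian noise as a stand-in for the privacy
        # mechanism, and quantizes to a few bits; the server dequantizes and averages.

        def privatize_update(grad, clip_norm=1.0, noise_std=0.5, num_bits=4, rng=None):
            rng = rng or np.random.default_rng()
            # Clip to bound each client's contribution (sensitivity).
            scale = min(1.0, clip_norm / (np.linalg.norm(grad) + 1e-12))
            noised = grad * scale + rng.normal(0.0, noise_std, size=grad.shape)
            # Coarse uniform quantization to num_bits per coordinate for cheap communication.
            lo, hi = -clip_norm - 3 * noise_std, clip_norm + 3 * noise_std
            levels = 2 ** num_bits - 1
            q = np.round((np.clip(noised, lo, hi) - lo) / (hi - lo) * levels)
            return q, (lo, hi, levels)

        def server_aggregate(quantized_updates, meta):
            lo, hi, levels = meta
            # Dequantize each client's message and average; per-client noise averages out.
            return np.mean([q / levels * (hi - lo) + lo for q in quantized_updates], axis=0)

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            true_grad = rng.normal(size=10)
            msgs = [privatize_update(true_grad + 0.1 * rng.normal(size=10), rng=rng)
                    for _ in range(100)]
            updates, metas = zip(*msgs)
            print(server_aggregate(updates, metas[0]))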

    Asynchronous (Sub)gradient-Push

    Presented on April 4, 2018 at 12:00 p.m. in the Marcus Nanotechnology Building, Room 1116.

    Mike Rabbat is a Research Scientist in the Facebook AI Research group. He is currently on leave from McGill University, where he is an Associate Professor of Electrical and Computer Engineering. Mike’s research interests are in the areas of networks, statistical signal processing, and machine learning. Currently, he is working on gossip algorithms for distributed processing, distributed tracking, and algorithms and theory for signal processing on graphs.

    Runtime: 61:51 minutes

    We consider a multi-agent framework for distributed optimization where each agent in the network has access to a local convex function, and the collective goal is to achieve consensus on the parameters that minimize the sum of the agents' local functions. We propose an algorithm wherein each agent operates asynchronously and independently of the other agents in the network. When the local functions are strongly convex with Lipschitz-continuous gradients, we show that a subsequence of the iterates at each agent converges to a neighbourhood of the global minimum, where the size of the neighbourhood depends on the degree of asynchrony in the multi-agent network. When the agents work at the same rate, convergence to the global minimizer is achieved. Numerical experiments demonstrate that Asynchronous Subgradient-Push can minimize the global objective faster than state-of-the-art synchronous first-order methods, is more robust to failing or stalling agents, and scales better with the network size. This is joint work with Mahmoud Assran.
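
    For intuition about the subgradient-push family the talk builds on, the following is a minimal synchronous push-sum subgradient sketch in Python over a directed ring with quadratic local objectives. The graph, objectives, step-size schedule, and names are illustrative assumptions, and this is not the asynchronous algorithm described in the abstract; it only shows the push-sum idea, where each agent's ratio x_i/w_i corrects the bias introduced by directed, unbalanced communication.

        import numpy as np

        # Minimal synchronous push-sum subgradient sketch (illustrative assumptions: a directed
        # ring, quadratic local objectives f_i(x) = 0.5*||x - c_i||^2, diminishing step sizes).
        # It is NOT the asynchronous algorithm from the talk; it only shows the push-sum idea.

        def subgradient_push(c, num_iters=500, step0=1.0):
            n, d = c.shape
            # Column-stochastic mixing: each agent keeps half its mass, pushes half to agent i+1.
            P = 0.5 * np.eye(n)
            for i in range(n):
                P[(i + 1) % n, i] += 0.5
            x = np.zeros((n, d))   # push-sum numerators
            w = np.ones(n)         # push-sum weights
            for t in range(1, num_iters + 1):
                x, w = P @ x, P @ w
                z = x / w[:, None]             # de-biased local estimates
                x = x - (step0 / t) * (z - c)  # gradient of each local quadratic at z
            return x / w[:, None]

        if __name__ == "__main__":
            c = np.random.default_rng(0).normal(size=(5, 3))
            print("agent 0 estimate:", subgradient_push(c)[0])
            print("true minimizer:  ", c.mean(axis=0))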